Search results for "Kernel embedding of distributions"

Showing 10 of 18 documents

Reproducing kernel Hilbert spaces regression methods for genomic-assisted prediction of quantitative traits

2008

Abstract Reproducing kernel Hilbert spaces regression procedures for prediction of total genetic value for quantitative traits, which make use of phenotypic and genomic data simultaneously, are discussed from a theoretical perspective. It is argued that a nonparametric treatment may be needed for capturing the multiple and complex interactions potentially arising in whole-genome models, i.e., those based on thousands of single-nucleotide polymorphism (SNP) markers. After a review of reproducing kernel Hilbert spaces regression, it is shown that the statistical specification admits a standard mixed-effects linear model representation, with smoothing parameters treated as variance components.…
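As a minimal sketch of the estimator discussed above: RKHS regression with a penalty yields the kernel ridge form g = K(K + λI)⁻¹y, where λ plays the role of a ratio of variance components in the mixed-model view. The Gaussian kernel and all names below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def gaussian_kernel(X, bandwidth=1.0):
    """Gram matrix of a Gaussian kernel over the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def rkhs_fit(K, y, lam):
    """RKHS / kernel ridge regression: g_hat = K (K + lam I)^{-1} y.
    In the mixed-model representation, lam corresponds to the ratio of
    residual to genetic variance components (a smoothing parameter)."""
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ alpha
```

Small λ interpolates the phenotypes; large λ shrinks the fit toward zero, mirroring how the variance components trade off fit against smoothness.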

Keywords: Bayesian inference; machine learning; kernel principal component analysis; chromosomes; quantitative trait (heritable); genetics; animals; genome; genetic models; representer theorem; Hilbert space; linear model; Bayes theorem; genomics; kernel embedding of distributions; kernel (statistics); principal component regression; regression analysis; artificial intelligence; chickens
Published in: Genetics

Optimized Kernel Entropy Components

2016

This work addresses two main issues of the standard Kernel Entropy Component Analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of by variance as in Kernel Principal Components Analysis. In this work, we propose an extension of the KECA method, named Optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular…
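The entropy-based sorting that KECA performs can be sketched as follows, assuming an uncentered Gram matrix K: each eigen-direction is scored by its contribution λᵢ(1ᵀeᵢ)² to the Rényi entropy estimate, rather than by λᵢ alone as in kernel PCA. This shows only the base KECA ranking, not the OKECA optimization:

```python
import numpy as np

def keca_order(K):
    """Rank kernel eigen-directions by their Renyi entropy contribution
    lambda_i * (1^T e_i)^2 (KECA), instead of by eigenvalue alone (KPCA).
    Returns column indices of eigh(K), most informative first."""
    lam, E = np.linalg.eigh(K)            # eigenvalues in ascending order
    contrib = lam * (E.sum(axis=0)) ** 2  # 1^T e_i for each eigenvector column
    return np.argsort(contrib)[::-1]
```

A direction with a large eigenvalue but near-zero projection onto the all-ones vector contributes little entropy and is ranked low, which is exactly where KECA and kernel PCA disagree.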

Keywords: kernel density estimation; machine learning; kernel principal component analysis; polynomial kernel; pattern recognition; kernel method; kernel embedding of distributions; variable kernel density estimation; radial basis function kernel; kernel smoother; artificial intelligence
Published in: IEEE Transactions on Neural Networks and Learning Systems

Semisupervised nonlinear feature extraction for image classification

2012

Feature extraction is of paramount importance for an accurate classification of remote sensing images. Techniques based on data transformations are widely used in this context. However, linear feature extraction algorithms, such as the principal component analysis and partial least squares, can address this problem in a suboptimal way because the data relations are often nonlinear. Kernel methods may alleviate this problem only when the structure of the data manifold is properly captured. However, this is difficult to achieve when small-size training sets are available. In these cases, exploiting the information contained in unlabeled samples together with the available training data can si…

Keywords: graph kernel; feature extraction; kernel principal component analysis; k-nearest neighbors algorithm; kernel (linear algebra); polynomial kernel; partial least squares regression; least squares support vector machine; cluster analysis; training set; contextual image classification; dimensionality reduction; pattern recognition; manifold; kernel method; kernel embedding of distributions; kernel (statistics); principal component analysis; radial basis function kernel; principal component regression; data mining; artificial intelligence
Published in: 2012 IEEE International Geoscience and Remote Sensing Symposium

Kernel-Based Inference of Functions Over Graphs

2018

Abstract The study of networks has witnessed explosive growth over the past decades, with several ground-breaking methods introduced. A particularly interesting problem, prevalent in several fields of study, is that of inferring a function defined over the nodes of a network. This work presents a versatile kernel-based framework for tackling this inference problem that naturally subsumes and generalizes the reconstruction approaches put forth recently by the graph signal processing community. Both the static and the dynamic settings are considered, along with effective modeling approaches for addressing real-world problems. The analytical discussion herein is complement…
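A hedged sketch of this idea (not the paper's exact framework): a function over graph nodes can be reconstructed with kernel ridge regression under a regularized-Laplacian kernel. The kernel choice K = (L + σI)⁻¹ and all parameter names are illustrative assumptions:

```python
import numpy as np

def infer_graph_function(L, obs_idx, y_obs, lam=0.1, sigma=1.0):
    """Estimate a function on all graph nodes from values observed on a
    few nodes: build the regularized-Laplacian kernel K = (L + sigma I)^{-1},
    fit kernel ridge regression on the observed nodes, predict everywhere."""
    n = L.shape[0]
    K = np.linalg.inv(L + sigma * np.eye(n))          # graph kernel over nodes
    K_oo = K[np.ix_(obs_idx, obs_idx)]                # observed-observed block
    alpha = np.linalg.solve(K_oo + lam * np.eye(len(obs_idx)), y_obs)
    return K[:, obs_idx] @ alpha                      # predictions at all nodes
```

The kernel encodes the graph topology, so unobserved nodes inherit values smoothly from their neighbors; other Laplacian-based kernels (e.g. diffusion kernels) slot into the same template.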

Keywords: graph kernel; theoretical computer science; inference; pattern recognition; graph; kernel (linear algebra); kernel method; polynomial kernel; string kernel; kernel embedding of distributions; kernel (statistics); radial basis function kernel; artificial intelligence; tree kernel

Model selection based product kernel learning for regression on graphs

2013

The choice of a suitable graph kernel is intrinsically hard and often cannot be made in an informed manner for a given dataset. Methods for multiple kernel learning offer a possible remedy, as they combine and weight kernels on the basis of a labeled training set of molecules to define a new kernel. Whereas most methods for multiple kernel learning focus on learning convex linear combinations of kernels, we propose to combine kernels in products, which in theory permits higher expressiveness. In experiments on ten publicly available chemical QSAR datasets we show that product kernel learning is not significantly worse than any of the competing kernel methods on any dataset, and on average the…
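A minimal illustration of the product combination, assuming precomputed Gram matrices (the learning/weighting of the combination is omitted). The elementwise product of positive semidefinite Gram matrices is again positive semidefinite by the Schur product theorem, so the combination is itself a valid kernel:

```python
import numpy as np

def hadamard_product_kernel(grams):
    """Combine Gram matrices by elementwise (Hadamard) product.
    By the Schur product theorem, the elementwise product of PSD
    matrices is PSD, so the result is a valid kernel matrix."""
    out = np.ones_like(grams[0])
    for G in grams:
        out = out * G
    return out
```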

Keywords: graph kernel; training set; multiple kernel learning; pattern recognition; semi-supervised learning; machine learning; kernel (linear algebra); kernel method; kernel embedding of distributions; polynomial kernel; kernel (statistics); radial basis function kernel; artificial intelligence; tree kernel
Published in: Proceedings of the 28th Annual ACM Symposium on Applied Computing

A structural cluster kernel for learning on graphs

2012

In recent years, graph kernels have received considerable interest within the machine learning and data mining community. Here, we introduce a novel approach enabling kernel methods to utilize additional information hidden in the structural neighborhood of the graphs under consideration. Our novel structural cluster kernel (SCK) incorporates similarities induced by a structural clustering algorithm to improve state-of-the-art graph kernels. The approach taken is based on the idea that graph similarity can not only be described by the similarity between the graphs themselves, but also by the similarity they possess with respect to their structural neighborhood. We applied our novel kernel in…

Keywords: graph kernel; pattern recognition; kernel method; string kernel; polynomial kernel; kernel embedding of distributions; radial basis function kernel; artificial intelligence; tree kernel; cluster analysis
Published in: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Learning with the kernel signal to noise ratio

2012

This paper presents the application of the kernel signal-to-noise ratio (KSNR) to feature extraction in general machine learning and signal processing domains. The proposed approach maximizes the signal variance while minimizing the estimated noise variance in a reproducing kernel Hilbert space (RKHS). The KSNR can be used in any kernel method to deal with correlated (possibly non-Gaussian) noise. We illustrate the method in nonlinear regression examples, dependence estimation and causal inference, nonlinear channel equalization, and nonlinear feature extraction from high-dimensional satellite images. Results show that the proposed KSNR yields better-fitted solutions and extract…

Keywords: kernel method; signal-to-noise ratio; kernel embedding of distributions; polynomial kernel; variable kernel density estimation; kernel (statistics); radial basis function kernel; pattern recognition; artificial intelligence; kernel principal component analysis
Published in: 2012 IEEE International Workshop on Machine Learning for Signal Processing

An Introduction to Kernel Methods

2009

Machine learning experienced great advances in the eighties and nineties due to active research in artificial neural networks and adaptive systems. These tools have demonstrated good results in many real applications, since neither a priori knowledge about the distribution of the available data nor about the relationships among the independent variables needs to be assumed. Overfitting due to small training data sets is controlled by means of a regularized functional which minimizes the complexity of the machine. Working with high-dimensional input spaces is no longer a problem thanks to the use of kernel methods. Such methods also provide us with new ways to interpret the cl…

Keywords: mathematical optimization; machine learning; kernel principal component analysis; kernel method; variable kernel density estimation; polynomial kernel; kernel embedding of distributions; kernel (statistics); radial basis function kernel; kernel smoother; artificial intelligence

Weighted samples, kernel density estimators and convergence

2003

This note extends the standard kernel density estimator to the case of weighted samples in several ways. In the first place, I consider the obvious extension, substituting the simple sum in the definition of the estimator with a weighted sum; I also consider other ways of introducing weights, based on adaptive kernel density estimators, treating the weights as indicators of the informational content of the observations and, in this sense, as signals of the local density of the data. All these ideas are illustrated using the Penn World Table in the context of macroeconomic convergence.
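The first extension described above, replacing the simple sum with a weighted sum, can be sketched as follows for a Gaussian kernel; names and the bandwidth choice are illustrative, not the note's own implementation:

```python
import numpy as np

def weighted_kde(x_eval, samples, weights, h):
    """Gaussian kernel density estimate in which the simple sum over
    samples is replaced by a weighted sum. Weights are normalized so
    the estimate still integrates to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    u = (np.asarray(x_eval)[:, None] - np.asarray(samples)[None, :]) / h
    kern = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # standard normal kernel
    return (kern * w[None, :]).sum(axis=1) / h
```

With equal weights this reduces to the standard estimator; uneven weights let more informative observations pull more local mass, which is the interpretation the note pursues.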

Keywords: statistics and probability; economics and econometrics; mathematical optimization; kernel density estimation; estimator; multivariate kernel density estimation; kernel principal component analysis; Penn World Table; kernel embedding of distributions; variable kernel density estimation; kernel (statistics); applied mathematics
Published in: Empirical Economics

Gamma Kernel Intensity Estimation in Temporal Point Processes

2011

In this article, we propose a nonparametric approach for estimating the intensity function of temporal point processes based on kernel estimators. In particular, we use asymmetric kernel estimators characterized by the gamma distribution, in order to describe features of observed point patterns adequately. Some characteristics of these estimators are analyzed and discussed both through simulated results and applications to real data from different seismic catalogs.
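A hedged sketch of an asymmetric gamma-kernel intensity estimator in the spirit described above, assuming a Chen-style gamma kernel with shape t/b + 1 and scale b; the exact kernel form is an assumption for illustration, not the article's definitive specification:

```python
import numpy as np
from math import lgamma, log

def gamma_kernel_intensity(t_eval, events, b):
    """Asymmetric gamma-kernel estimate of a temporal point-process
    intensity on [0, inf): at each t, sum over event times of the gamma
    density with shape t/b + 1 and scale b (assumed Chen-style kernel).
    The kernel's support is [0, inf), avoiding boundary bias at t = 0."""
    events = np.asarray(events, dtype=float)
    out = np.empty(len(t_eval))
    for i, t in enumerate(t_eval):
        a = t / b + 1.0  # shape varies with the evaluation point
        logpdf = (a - 1.0) * np.log(events) - events / b - lgamma(a) - a * log(b)
        out[i] = np.exp(logpdf).sum()
    return out
```

Because the kernel is skewed rather than symmetric, mass is never placed on negative times, which is what makes this family attractive for event times such as seismic catalogs.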

Keywords: statistics and probability; nonparametric statistics; estimator; kernel principal component analysis; point process; variable kernel density estimation; kernel embedding of distributions; modeling and simulation; kernel (statistics); bounded domain; gamma distribution; gamma kernel estimator; intensity function; temporal point processes
Published in: Communications in Statistics - Simulation and Computation